[Improvement](scheduler) Use a separate eager queue to execute canceled tasks #45614
base: master
Conversation
Thank you for your contribution to Apache Doris. Please clearly describe your PR:
run buildall
clang-tidy review says "All clean, LGTM! 👍"
run buildall
clang-tidy review says "All clean, LGTM! 👍"
TeamCity be ut coverage result:
run buildall
clang-tidy made some suggestions
TeamCity be ut coverage result:
run buildall
TeamCity be ut coverage result:
run buildall
TeamCity be ut coverage result:
run buildall
TeamCity be ut coverage result:
clang-tidy made some suggestions
be/src/pipeline/task_queue.cpp
Outdated
@@ -172,7 +178,7 @@ PipelineTask* MultiCoreTaskQueue::take(int core_id) {
     return task;
 }

-PipelineTask* MultiCoreTaskQueue::_steal_take(int core_id) {
+std::shared_ptr<PipelineTask> MultiCoreTaskQueue::_steal_take(int core_id) {
warning: method '_steal_take' can be made const [readability-make-member-function-const]
-std::shared_ptr<PipelineTask> MultiCoreTaskQueue::_steal_take(int core_id) {
+std::shared_ptr<PipelineTask> MultiCoreTaskQueue::_steal_take(int core_id) const {
be/src/pipeline/task_queue.h:126:
- std::shared_ptr<PipelineTask> _steal_take(int core_id);
+ std::shared_ptr<PipelineTask> _steal_take(int core_id) const;
run buildall
TeamCity be ut coverage result:
run buildall
TPC-H: Total hot run time: 39918 ms
TeamCity be ut coverage result:
TPC-DS: Total hot run time: 189933 ms
ClickBench: Total hot run time: 32.85 s
run buildall
TPC-H: Total hot run time: 39954 ms
TPC-DS: Total hot run time: 189600 ms
ClickBench: Total hot run time: 32.22 s
TeamCity be ut coverage result:
run buildall
TeamCity be ut coverage result:
auto expected = TaskState::VALID;
if (!holder->state.compare_exchange_strong(expected, TaskState::RUNNING)) {
    if (expected == TaskState::RUNNING) {
        static_cast<void>(_task_queue.push_back(holder, index));
If the state is already RUNNING, the task is being executed from another queue, so that queue should be the one to re-enqueue it.
No. This check prevents a task from entering the scheduling queue twice: the task has already been placed into the queue where it is expected to run, and we just have to wait for it to exit its previous execution thread.
run buildall
run buildall
TPC-H: Total hot run time: 32585 ms
TeamCity be ut coverage result:
TPC-DS: Total hot run time: 196680 ms
ClickBench: Total hot run time: 31.52 s
What problem does this PR solve?
Currently, a pipeline task that is canceled is put into the ordinary runnable queue and only releases its memory once it actually runs. However, the runnable queue may hold too many tasks, which can delay the canceled task unacceptably. This PR therefore uses a separate eager queue to process all canceled tasks.
Release note
None
Check List (For Author)
Test
Behavior changed:
Does this need documentation?
Check List (For Reviewer who merge this PR)